Translation and Dictionary
Words near each other
・ Simićevo
・ Simjung
・ Simjur al-Dawati
・ Simjurids
・ Simkania
・ Simkania negevensis
・ Simkaniaceae
・ Simkin
・ Simkin de Pio
・ Simkins
・ Similan Islands
・ Similar fact evidence
・ Similar Skin
・ Similaria
・ Similarities (album)
Similarities between Wiener and LMS
・ Similarity
・ Similarity (geometry)
・ Similarity (network science)
・ Similarity (psychology)
・ Similarity heuristic
・ Similarity invariance
・ Similarity learning
・ Similarity matrix
・ Similarity measure
・ Similarity relation (music)
・ Similarity score
・ Similarity search
・ Similarity solution
・ Similarity transformation



Similarities between Wiener and LMS : Wikipedia English edition
Similarities between Wiener and LMS

The least mean squares (LMS) filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion to reduce the current sample error instead of minimizing the total error over all n, the LMS algorithm can be derived from the Wiener filter.
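This convergence can be illustrated numerically: an LMS filter, updated from the current sample error only, drifts toward the taps of the unknown FIR system (and hence toward the Wiener solution). A minimal NumPy sketch; the system h, the step size mu, and the signal lengths are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown LTI system: a short FIR filter h (assumed for illustration).
h = np.array([0.6, -0.3, 0.1])
N = len(h)

# Known input signal s(n) and observed output x(n) with weak stationary noise w(n).
n_samples = 20000
s = rng.standard_normal(n_samples)
x = np.convolve(s, h)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

# LMS: update the tap estimates from the *current* sample error only.
h_hat = np.zeros(N)
mu = 0.01  # step size (must be small enough for stability)
for n in range(N, n_samples):
    window = s[n - N + 1:n + 1][::-1]   # s(n), s(n-1), ..., s(n-N+1)
    e = x[n] - h_hat @ window           # current sample error e(n)
    h_hat += 2 * mu * e * window        # gradient step on e(n)^2

print(h_hat)  # close to h
```

After enough samples the tap estimates hover near the true impulse response, with a small residual wander set by the step size and the noise level.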
== Derivation of the Wiener filter for system identification ==

Given a known input signal s(n), the output x(n) of an unknown LTI system can be expressed as:
x(n) = \sum_{k=0}^{N-1} h_k s(n-k) + w(n)
where h_k are the unknown filter tap coefficients and w(n) is noise.
The model system \hat{x}(n), using a Wiener filter solution of order N, can be expressed as:
\hat{x}(n) = \sum_{k=0}^{N-1} \hat{h}_k s(n-k)
where \hat{h}_k are the filter tap coefficients to be determined.
The error between the model and the unknown system can be expressed as:
e(n) = x(n) - \hat{x}(n)
The total squared error E can be expressed as:
E = \sum_{n=-\infty}^{\infty} e(n)^2
E = \sum_{n=-\infty}^{\infty} \left( x(n) - \hat{x}(n) \right)^2
E = \sum_{n=-\infty}^{\infty} \left( x(n)^2 - 2 x(n)\hat{x}(n) + \hat{x}(n)^2 \right)
Use the minimum mean-square error criterion over all n by setting the gradient to zero:
\nabla E = 0
that is,
\frac{\partial E}{\partial \hat{h}_i} = 0
for all i = 0, 1, 2, \ldots, N-1
\frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \sum_{n=-\infty}^{\infty} \left[ x(n)^2 - 2 x(n)\hat{x}(n) + \hat{x}(n)^2 \right]
Substitute the definition of \hat{x}(n):
\frac{\partial E}{\partial \hat{h}_i} = \frac{\partial}{\partial \hat{h}_i} \sum_{n=-\infty}^{\infty} \left[ x(n)^2 - 2 x(n) \sum_{k=0}^{N-1} \hat{h}_k s(n-k) + \left( \sum_{k=0}^{N-1} \hat{h}_k s(n-k) \right)^2 \right]
Distribute the partial derivative:
\frac{\partial E}{\partial \hat{h}_i} = \sum_{n=-\infty}^{\infty} \left[ -2 x(n) s(n-i) + 2 \left( \sum_{k=0}^{N-1} \hat{h}_k s(n-k) \right) s(n-i) \right]
Using the definition of discrete cross-correlation:
R_{xy}(i) = \sum_{n=-\infty}^{\infty} x(n) y(n-i)
the derivative becomes
\frac{\partial E}{\partial \hat{h}_i} = -2 R_{xs}(i) + 2 \sum_{k=0}^{N-1} \hat{h}_k R_{ss}(i-k) = 0
Rearranging the terms:
R_{xs}(i) = \sum_{k=0}^{N-1} \hat{h}_k R_{ss}(i-k)
for all i = 0, 1, 2, \ldots, N-1
This system of N equations with N unknowns can be solved for the coefficients \hat{h}_k.
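The N equations above can be solved directly once the correlations are estimated from data. A minimal NumPy sketch, assuming a hypothetical unknown system h and white-noise input; the helper xcorr and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system (assumed for illustration) and known input s(n).
h = np.array([0.6, -0.3, 0.1])
N = len(h)
n_samples = 50000
s = rng.standard_normal(n_samples)
x = np.convolve(s, h)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

def xcorr(a, b, lag):
    """Sample cross-correlation R_ab(lag) ~ (1/M) * sum_n a(n) b(n - lag)."""
    if lag >= 0:
        return np.dot(a[lag:], b[:len(b) - lag]) / len(a)
    return xcorr(b, a, -lag)  # R_ab(-lag) = R_ba(lag)

# Build the vector R_xs(i) and the matrix R_ss(i - k), then solve for the taps.
r_xs = np.array([xcorr(x, s, i) for i in range(N)])
R_ss = np.array([[xcorr(s, s, i - k) for k in range(N)] for i in range(N)])
h_hat = np.linalg.solve(R_ss, r_xs)

print(h_hat)  # close to h
```

Unlike the LMS iteration, this batch solution uses the whole record at once: estimating the correlations and solving the N-by-N system recovers the Wiener taps in a single linear solve.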

Excerpt source: the free encyclopedia Wikipedia.
Read the full article "Similarities between Wiener and LMS" on Wikipedia.



Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.